An integrated object representation for recognition and grasping
Authors
Abstract
As a step towards systems that can acquire knowledge automatically, we have designed a system that can learn new objects with a minimum of user interaction and implemented it on our robot platform GripSee [1]. A novel object is placed into the robot’s gripper in order to define a default orientation and a default grip. The robot then places the object on a turning table and builds up a visual representation consisting of a collection of graphs labeled with multiscale edges. A user interface for correcting errors in the representation is also part of the system. The visual representation is complemented by a grip library, which contains possible ways of grasping and manipulating the object robustly. We regard this procedure as an example of Human Assisted Learning.
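The abstract describes an object model with two coupled sides: a visual representation (a collection of view graphs labeled with multiscale edge features) and a grip library that includes the user-defined default grip. A minimal sketch of such a structure, with all class and field names being illustrative assumptions rather than the paper's actual data structures, might look like:

```python
# Hypothetical sketch of an integrated object representation:
# view graphs (visual side) paired with a grip library (action side).
# All names are illustrative, not taken from the GripSee implementation.
from dataclasses import dataclass, field

@dataclass
class GraphNode:
    position: tuple        # (x, y) image position of the node
    edge_features: list    # multiscale edge responses at this position

@dataclass
class ViewGraph:
    view_angle: float                            # turning-table angle of this view
    nodes: list = field(default_factory=list)    # GraphNode instances
    links: list = field(default_factory=list)    # index pairs connecting nodes

@dataclass
class Grip:
    approach: tuple           # gripper approach direction
    orientation: tuple        # gripper orientation at contact
    is_default: bool = False  # grip defined when the user handed over the object

@dataclass
class ObjectModel:
    name: str
    views: list = field(default_factory=list)    # visual representation
    grips: list = field(default_factory=list)    # grip library

    def default_grip(self):
        # Return the user-taught default grip, if one was recorded.
        return next((g for g in self.grips if g.is_default), None)

# Usage: register a new object with one view and its default grip.
mug = ObjectModel(name="mug")
mug.views.append(ViewGraph(view_angle=0.0))
mug.grips.append(Grip(approach=(0, 0, -1), orientation=(1, 0, 0), is_default=True))
assert mug.default_grip() is mug.grips[0]
```

The point of the pairing is that recognition (matching view graphs) and action (selecting a grip) share one object record, so a recognized object immediately yields candidate grasps.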
Similar resources
Modifying a Conventional Grasping Control Approach for Undesired Slippage Control in Cooperating Manipulator Systems
There has been much research on object grasping in cooperating systems that assumes no object slippage and a stable grasp, with the control system designed to keep the contact force inside the friction cone to prevent slippage. However, undesired slippage can occur due to environmental conditions and many other reasons. In this research, dynamic analysis and control synthesis of a cooperating ...
Scene Representation and Object Grasping Using Active Vision
Object grasping and manipulation pose major challenges for perception and control and require rich interaction between these two fields. In this paper, we concentrate on the plethora of perceptual problems that have to be solved before a robot can be moved in a controlled way to pick up an object. A vision system is presented that integrates a number of different computational processes, e.g. a...
2D/3D Object Categorization for Task Based Grasping
Fig. 1. System outline. First row: Data acquisition using ARMAR III robot head and view of a typical experimental scene. Second row: Segmented objects in the same scene. Third row: Integrated 2D and 3D Object Categorization Systems (OCSs). Fourth row: Generation of grasping points by Bayesian network. Fifth row: Experimental scene with grasping points for categorized objects and desired task (t...
Simulation Modifies Prehension: Evidence for a Conjoined Representation of the Graspable Features of an Object and the Action of Grasping It
Movement formulas, engrams, kinesthetic images and internal models of the body in action are notions derived mostly from clinical observations of brain-damaged subjects. They also suggest that the prehensile geometry of an object is integrated in the neural circuits and includes the object's graspable characteristics as well as its semantic properties. In order to determine whether there is a c...
Shape analysis and hand preshaping for grasping
Observations of human grasping [8], [7] have shown two phases: during the reaching phase of grasping, the hand preshapes in order to prepare the "shape matching" with the object to grasp, which is the following adjusting phase. Planning grasping with dextrous robotic hands cannot be reduced to these two phases. We have to split the grasping process into several phases (frequently overlapping), ...
Journal:
Volume  Issue
Pages  -
Publication year: 1999